From Finite Sample to Asymptotic Methods in Statistics (ISBN 0-521-87722-0)
Authors
Abstract
Censoring, 116
Coefficient
    concentration, 287
    confidence, 173
    uncertainty, 287
Competing risks, 371
Confidence interval, 13, 173
    coverage probability, 173
    unbiased, 96
    uniformly most accurate, 96
Contingency tables, 274
Convergence
    almost certain, 120
    almost sure, 120, 121
    almost sure convergence of series, 164
    complete, 120, 129
    equivalent, 148
    in distribution, 125
    in law, 125
    in probability, 120, 121
    in the rth mean, 120, 124
    of moments, 232
    stochastic, 14, 119
    strong, 14, 120
    weak, 14, 120, 125
Cross-product ratio, 286
Data
    balanced, 4
    count, 273
    grouped, 3
    longitudinal, 5
    ordered categorical, 273
    repeated measures, 4
Decision rule, 84
    admissible, 86
    complete class, 86
    empirical Bayes, 93
    minimax, 86
Decision theory, 13, 83
Distribution
    F, 26
    t-Student, 8, 26
    beta, 25
    beta-binomial, 97
    binomial, 2, 24
    bivariate normal, 3
    Cauchy, 9, 11, 25, 50
    Cauchy type, 227
    chi-squared, 25, 26
    concave exponential type, 231
    conjugate prior, 91
    contaminated normal, 51
    convex exponential type, 231
    Dirichlet, 91
    double exponential, 25
    error contamination, 57
    exponential, 114
    exponential family, 45, 317
    exponential type, 227
    extreme value of the first type, 228
    extreme value of the second type, 229
    extreme value of the third type, 230
    finite lower end point, 153
    finite upper end point, 153
    gamma, 24
    gross error contamination, 65
    heavy tails, 57
    hypergeometric, 2
    improper prior, 92
Similar articles
Asymptotic Behaviors of Nearest Neighbor Kernel Density Estimator in Left-truncated Data
Kernel density estimators are the basic tools for density estimation in non-parametric statistics. The k-nearest neighbor kernel estimator is a special form of kernel density estimator in which the bandwidth varies with the location of the sample points. In this paper, we first introduce the k-nearest neighbor kernel density estimator in the random left-truncatio...
Full text
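The adaptive-bandwidth idea described in this abstract can be made concrete with a minimal sketch of a k-nearest-neighbor kernel density estimator for fully observed (untruncated) data. The Gaussian kernel, the choice of k, and the function name knn_kernel_density are illustrative assumptions, not the estimator studied in the paper.

```python
import numpy as np

def knn_kernel_density(x_grid, sample, k=20):
    """k-nearest-neighbor kernel density estimate on a grid.

    The bandwidth at each evaluation point x is the distance from x to its
    k-th nearest sample point, so the smoothing adapts to the local density.
    (Illustrative only: no left-truncation adjustment is made here.)
    """
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    k = min(k, n)
    estimates = np.empty(len(x_grid))
    for j, x in enumerate(x_grid):
        dists = np.sort(np.abs(sample - x))
        h = max(dists[k - 1], 1e-12)                          # adaptive bandwidth h_k(x)
        u = (x - sample) / h
        kernel = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)     # Gaussian kernel
        estimates[j] = kernel.sum() / (n * h)
    return estimates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=500)
    grid = np.linspace(-4, 4, 9)
    print(np.round(knn_kernel_density(grid, data, k=25), 3))
```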
Testing multivariate uniformity and its applications
Some new statistics are proposed to test the uniformity of random samples in the multidimensional unit cube [0, 1]^d (d ≥ 2). These statistics are derived from number-theoretic or quasi-Monte Carlo methods for measuring the discrepancy of points in [0, 1]^d. Under the null hypothesis that the samples are independent and identically distributed with a uniform distribution on [0, 1]^d, we obtain som...
Full text
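As a rough illustration of the discrepancy-based approach, the sketch below computes one classical quasi-Monte Carlo measure, the centered L2-discrepancy, together with a crude Monte Carlo p-value under the uniform null. The specific discrepancy, the sample sizes, and the function name centered_l2_discrepancy are assumptions and need not coincide with the statistics proposed in the paper.

```python
import numpy as np

def centered_l2_discrepancy(points):
    """Squared centered L2-discrepancy of points in [0, 1]^d.

    Small values indicate points spread close to uniformly; a uniformity test
    can compare the observed value with its Monte Carlo null distribution
    under i.i.d. U[0, 1]^d sampling.
    """
    x = np.asarray(points, dtype=float)
    n, d = x.shape
    half = np.abs(x - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = (2.0 / n) * np.prod(1.0 + 0.5 * half - 0.5 * half**2, axis=1).sum()
    diff = np.abs(x[:, None, :] - x[None, :, :])               # pairwise coordinate gaps
    prod = np.prod(1.0 + 0.5 * half[:, None, :] + 0.5 * half[None, :, :] - 0.5 * diff, axis=2)
    term3 = prod.sum() / n**2
    return term1 - term2 + term3

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sample = rng.uniform(size=(200, 3))
    obs = centered_l2_discrepancy(sample)
    # crude Monte Carlo p-value: large discrepancy is evidence against uniformity
    null = [centered_l2_discrepancy(rng.uniform(size=(200, 3))) for _ in range(200)]
    print(round(obs, 4), float(np.mean(np.array(null) >= obs)))
```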
Some properties of Likelihood Ratio Tests in Linear Mixed Models
We calculate the finite-sample probability mass at zero and the probability of underestimating the true ratio between the random-effects variance and the error variance in an LMM with one variance component. The calculations are expedited by simple matrix diagonalization techniques. One possible application is to compute the probability that the log of the likelihood ratio (LRT), or residual likelihood ...
Full text
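For intuition, here is a Monte Carlo sketch of the mass-at-zero phenomenon in the simplest setting, a balanced one-way random-effects model. The estimator used (the ANOVA/REML-type estimator truncated at zero), the group sizes, and the variance values are assumptions for illustration and do not reproduce the paper's exact finite-sample calculations.

```python
import numpy as np

def mass_at_zero(m=10, n=5, sigma_a2=0.1, sigma_e2=1.0, reps=10000, seed=0):
    """Monte Carlo estimate of P(hat(sigma_a^2) = 0) in a balanced one-way
    random-effects model y_ij = mu + a_i + e_ij, i = 1..m, j = 1..n.

    With the ANOVA/REML-type estimator hat(sigma_a^2) = max(0, (MSB - MSW)/n),
    the estimate (and the LRT statistic for H0: sigma_a^2 = 0) is exactly zero
    whenever MSB <= MSW.
    """
    rng = np.random.default_rng(seed)
    zeros = 0
    for _ in range(reps):
        a = rng.normal(scale=np.sqrt(sigma_a2), size=(m, 1))
        y = a + rng.normal(scale=np.sqrt(sigma_e2), size=(m, n))
        group_means = y.mean(axis=1)
        msb = n * np.var(group_means, ddof=1)                           # between mean square
        msw = ((y - group_means[:, None]) ** 2).sum() / (m * (n - 1))   # within mean square
        zeros += msb <= msw
    return zeros / reps

if __name__ == "__main__":
    print(mass_at_zero(sigma_a2=0.0))   # under the null, the mass at zero is large
    print(mass_at_zero(sigma_a2=0.5))   # a stronger random effect shrinks it
```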
Asymptotic Distributions of Estimators of Eigenvalues and Eigenfunctions in Functional Data
Functional data analysis is a relatively new and rapidly growing area of statistics, driven in part by technological advances that make it possible to generate new types of data in the form of curves. Because the data are functions, they lie in function spaces, which are infinite-dimensional. A widely used way to analyse functional data is to employ princip...
Full text
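To make the functional principal component setup concrete, the following minimal sketch estimates eigenvalues and eigenfunctions of the sample covariance operator from curves observed on a common dense grid. The quadrature approximation, the simulated Karhunen-Loeve example, and the function name empirical_fpca are assumptions, not the estimators analysed in the paper.

```python
import numpy as np

def empirical_fpca(curves, grid):
    """Estimate eigenvalues/eigenfunctions of the covariance operator from
    curves observed on a common, equally spaced grid (crude quadrature).

    curves: array of shape (n_curves, n_points); grid: increasing time points.
    Returns eigenvalues (descending) and eigenfunctions as columns.
    """
    curves = np.asarray(curves, dtype=float)
    h = grid[1] - grid[0]                                   # grid spacing (assumed uniform)
    centered = curves - curves.mean(axis=0)
    cov = centered.T @ centered / (len(curves) - 1)         # pointwise sample covariance
    evals, evecs = np.linalg.eigh(cov * h)                  # quadrature-weighted operator
    order = np.argsort(evals)[::-1]
    eigenvalues = evals[order]
    eigenfunctions = evecs[:, order] / np.sqrt(h)           # approximately L2-normalised
    return eigenvalues, eigenfunctions

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 101)
    # two-component Karhunen-Loeve model with known eigenvalues 4 and 1
    scores = rng.normal(size=(300, 2)) * np.sqrt([4.0, 1.0])
    basis = np.vstack([np.sqrt(2) * np.sin(np.pi * t), np.sqrt(2) * np.sin(2 * np.pi * t)])
    x = scores @ basis + rng.normal(scale=0.05, size=(300, len(t)))
    lam, phi = empirical_fpca(x, t)
    print(np.round(lam[:3], 2))   # leading estimates should be near 4 and 1
```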
Unbiased bootstrap error estimation for linear discriminant analysis
Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight of the convex combination of the basic bootstrap estimator and the resubstitution estimator so that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic argume...
Full text
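For context, here is a minimal sketch of the convex bootstrap idea for LDA using the classical fixed 0.632 weight (0.368 × resubstitution error + 0.632 × out-of-bag bootstrap error). The two-class LDA implementation, the simulated data, and the helper names are assumptions, and the paper's finite-sample-unbiased weighting is not implemented here.

```python
import numpy as np

def lda_fit(X, y):
    """Fit a two-class linear discriminant with a pooled covariance matrix."""
    means = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
    pooled = sum(np.cov(X[y == k].T) * (np.sum(y == k) - 1) for k in (0, 1))
    pooled /= len(y) - 2
    priors = np.array([np.mean(y == 0), np.mean(y == 1)])
    return means, np.linalg.pinv(pooled), priors

def lda_predict(model, X):
    means, inv, priors = model
    scores = np.stack([X @ inv @ m - 0.5 * m @ inv @ m + np.log(p)
                       for m, p in zip(means, priors)], axis=1)
    return scores.argmax(axis=1)

def bootstrap_632_error(X, y, n_boot=200, seed=0):
    """0.632 convex bootstrap error estimate for LDA:
    0.368 * resubstitution error + 0.632 * out-of-bag (zero) bootstrap error."""
    rng = np.random.default_rng(seed)
    n = len(y)
    resub = np.mean(lda_predict(lda_fit(X, y), X) != y)
    oob_errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        oob = np.setdiff1d(np.arange(n), idx)
        if oob.size == 0 or len(np.unique(y[idx])) < 2:
            continue                      # skip degenerate resamples
        model = lda_fit(X[idx], y[idx])
        oob_errs.append(np.mean(lda_predict(model, X[oob]) != y[oob]))
    return 0.368 * resub + 0.632 * float(np.mean(oob_errs))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0.0, 1, size=(30, 5)), rng.normal(0.8, 1, size=(30, 5))])
    y = np.repeat([0, 1], 30)
    print(round(bootstrap_632_error(X, y), 3))
```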